22 research outputs found

    A Novel Production Workflow and Toolset for Opera Co-creation Towards Enhanced Societal Inclusion of People

    Opera uses all the visual and performing arts to create extraordinary worlds of passion and sensibility. It is rightly recognised as a great achievement of European culture. And yet a form that once inspired social and artistic revolutions is often seen as the staid preserve of the elite. With rising inequality and social exclusion, many see opera, if they think of it at all, as symbolic of what is wrong in Europe today. This paper presents the technological and scientific approach of the European H2020 TRACTION project, which aims to use opera as a path for social and cultural inclusion, making it once again a force for radical transformation. TRACTION wants to define new forms of artistic creation through which the most marginalised groups (e.g. migrants, the rural poor, young offenders and others) can work with artists to tell the stories that matter now. By combining best practices in participatory art with media technology's innovations of language, form and process, the project is defining new approaches to co-creation and innovation, exploring novel audiovisual formats based in European cultural heritage, such as opera.

    Predictive CDN Selection for Video Delivery Based on LSTM Network Performance Forecasts and Cost-Effective Trade-Offs

    Owing to increasing consumption of video streams and demand for higher-quality content and more advanced displays, future telecommunication networks are expected to outperform current networks in terms of key performance indicators (KPIs). Currently, content delivery networks (CDNs) are used to enhance media availability and delivery performance across the Internet in a cost-effective manner. The proliferation of CDN vendors and business models allows the content provider (CP) to use multiple CDN providers simultaneously. However, extreme concurrency dynamics can affect CDN capacity, causing performance degradation and outages, while overestimated demand affects costs. 5G standardization communities envision advanced network functions executing video analytics to enhance or boost media services. Network accelerators are required to enforce CDN resilience and efficient utilization of CDN assets. In this regard, this study investigates a cost-effective service to dynamically select the CDN for each session and video segment at the Media Server, without requiring any modification to the video streaming pipeline. This service performs time series forecasts by employing a Long Short-Term Memory (LSTM) network to process real-time measurements coming from connected video players. It also ensures reliable and cost-effective content delivery through proactive selection of the CDN that fits the performance and business constraints. To this end, the proposed service predicts the number of players that can be served by each CDN at each time; then, it switches the required players between CDNs to keep the Quality of Service (QoS) rates or to reduce the CP's operational expenditure (OPEX). The proposed solution is evaluated with a real server, CDNs, and players delivering dynamic adaptive streaming over HTTP (MPEG-DASH), where clients are notified to switch to another CDN through a standard MPEG-DASH media presentation description (MPD) update mechanism. This work was supported in part by the EC project Fed4Fire+ under Grant 732638 (H2020-ICT-13-2016, Research and Innovation Action), and in part by the Open-VERSO project (Red Cervera Program, Spanish Government's Centre for the Development of Industrial Technology).
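    The proactive selection step described above can be sketched as a greedy assignment: sessions go to the cheapest CDN first, capped at that CDN's forecast capacity. This is a minimal illustration of the idea under stated assumptions, not the project's actual algorithm; the function and parameter names are hypothetical.

    ```python
    def select_cdns(num_players, predicted_capacity, cost_per_session):
        """Greedily assign players to CDNs: cheapest CDN first, capped at
        the forecast capacity of each CDN (all names are illustrative)."""
        plan = {}
        remaining = num_players
        # Visit CDNs in order of increasing per-session cost to minimise OPEX.
        for cdn in sorted(cost_per_session, key=cost_per_session.get):
            take = min(remaining, predicted_capacity.get(cdn, 0))
            if take > 0:
                plan[cdn] = take
                remaining -= take
            if remaining == 0:
                break
        return plan, remaining  # remaining > 0 signals a capacity shortfall

    # Hypothetical forecast: 120 players, two CDNs with different costs.
    plan, unserved = select_cdns(
        120,
        predicted_capacity={"cdnA": 80, "cdnB": 100},
        cost_per_session={"cdnA": 0.02, "cdnB": 0.05},
    )
    ```

    In this toy scenario the cheaper CDN absorbs players up to its forecast capacity (80), and the remainder spills over to the more expensive one; a real deployment would re-run the assignment at each forecast interval.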

    A CNN-based Framework for Enhancing 360° VR Experiences with Multisensorial Effects

    Improving user experience during the delivery of immersive content is crucial for its success, for both content creators and the audience. Creators can express themselves better with multisensory stimulation, while the audience can experience a higher level of involvement. The rapid development of mulsemedia devices provides better access to stimuli such as olfaction and haptics. Nevertheless, owing to the manual annotation process required to add mulsemedia effects, the amount of content available with sensorial effects is still limited. This work introduces an innovative mulsemedia-enhancement solution capable of automatically generating olfactory and haptic content from 360° video content using neural networks. Two parallel neural networks are responsible for automatically adding scents to 360° videos: a scene detection network (responsible for static, global content) and an action detection network (responsible for dynamic, local content). A 360° video dataset with scent labels is also created and used to evaluate the robustness of the proposed solution. The solution achieves 69.19% olfactory accuracy and 72.26% haptics accuracy during evaluation on two different datasets.
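    A simple way to combine the two parallel networks' outputs is a per-label late fusion: for each scent label, keep the higher of the scene-level and action-level confidences, then fire the effects that cross a threshold. This is a minimal sketch of one plausible fusion rule, not the paper's actual merging logic; the function name, labels, and threshold are hypothetical.

    ```python
    def fuse_scent_predictions(scene_pred, action_pred, threshold=0.5):
        """Merge per-label confidences from two parallel classifiers by
        taking the max score per scent label (illustrative fusion rule)."""
        labels = set(scene_pred) | set(action_pred)
        fused = {label: max(scene_pred.get(label, 0.0), action_pred.get(label, 0.0))
                 for label in labels}
        # Return the alphabetically sorted labels that cross the threshold.
        return [label for label, score in sorted(fused.items()) if score >= threshold]

    # Hypothetical outputs: the scene network sees a forest, the action
    # network detects smoke and a stronger sea cue.
    active = fuse_scent_predictions(
        {"forest": 0.8, "sea": 0.3},
        {"smoke": 0.6, "sea": 0.55},
    )
    ```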

    Dataset of user interactions across four large pilots on the use of augmented reality in learning experiences

    Augmented Reality in education can support students in a wide range of cognitive tasks, fostering understanding, remembering, applying, analysing, evaluating, and creating learning-relevant information more easily. It can help sustain engagement, and it can render learning more fun. Within the framework of a multi-year investigation encompassing primary and secondary schools across Europe, the ARETE project developed several Augmented Reality applications, providing tools for user interaction and data collection in the education sector. The project developed innovative AR learning technology and methodology, validating these in four comprehensive pilot studies involving, in total, more than 2,900 students and teachers. Each pilot made use of a different Augmented Reality application covering specific subjects (English literacy skills, Mathematics and Geography, Positive Behaviour, plus an Augmented Reality authoring tool applied in a wide range of subjects). In this paper, we introduce the datasets collected during the pilots, describe how the data enabled the validation of the technology, and discuss how the chosen approach could enhance existing augmented reality applications in data exploration and modelling.

    Co-creation stage: A web-based tool for collaborative and participatory co-located art performances

    In recent years, artists and communities have expressed the desire to work with tools that facilitate co-creation and allow distributed community performances. These performances can be spread over several physical stages, connecting them in real time into a single experience, with the audience distributed across them. This enables a wider remote audience to consume the performance through their own devices, and even allows remote users to participate in the show. In this paper we introduce the Co-creation Stage, a web-based tool for managing heterogeneous content sources, with a particular focus on live and on-demand media, across several distributed devices. The Co-creation Stage is part of the toolset developed in the TRACTION H2020 project, which enables community performing art shows where professional artists and non-professional participants perform together from different stages and locations. Here we present the design process, the architecture and the main functionalities of the tool.

    The co-creation space: Supporting asynchronous artistic co-creation dynamics

    Artistic co-creation empowers communities to shape their narratives; however, HCI research offers little support for this multifaceted discussion and reflection process. In the context of community opera, we consider how to support co-creation through the design, implementation, and initial evaluation of the Co-Creation Space (CCS), which helps community artists 1) generate raw artistic ideas, and 2) discuss and reflect on the shared meaning of those ideas. This work describes our user-centered process to gather requirements and design the tool, and validates its usability with 6 community opera participants. Our findings support the value of our tool for group discussion and personal reflection during the creative process.

    Diarizing Large Corpora using Multi-modal Speaker Linking

    Speaker diarization of a collection of recordings with uniquely identified speakers is a challenging task. A system addressing such a task must account for the inter-session variability present from recording to recording, and it must scale well to massive amounts of data. In this paper we use a two-stage approach to corpus-wide speaker diarization involving speaker diarization and speaker linking stages. The speaker linking system agglomeratively clusters speaker factor posterior distributions obtained via Joint Factor Analysis, using the Ward method and the Hotelling t-square statistic as the distance measure. We extend this framework to link speakers based on both speech and visual modalities to improve the robustness of the system. The system is evaluated using the data collected for the Augmented Multiparty Interaction (AMI) project, involving over one hundred meetings. We provide results in terms of within-recording and across-recording diarization error rates (DER) that support the effectiveness of multi-modal speaker linking in enabling large-scale speaker diarization.
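    The speaker-linking stage described above is, at its core, agglomerative clustering: per-recording speaker models are merged bottom-up until no pair is similar enough. The sketch below illustrates that loop with a simplified Euclidean distance between cluster centroids, standing in for the paper's Ward/Hotelling t-square criterion; the function name, threshold, and toy vectors are hypothetical.

    ```python
    def link_speakers(speaker_factors, threshold):
        """Agglomeratively merge per-recording speaker models whose factor
        vectors are close. Euclidean centroid distance is a simplified
        stand-in for the Ward/Hotelling t-square criterion in the paper."""
        clusters = [[v] for v in speaker_factors]  # start: one cluster per speaker

        def centroid(cluster):
            return [sum(dim) / len(cluster) for dim in zip(*cluster)]

        def dist(a, b):
            ca, cb = centroid(a), centroid(b)
            return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

        while len(clusters) > 1:
            # Find the closest pair of clusters.
            (i, j), d = min(
                (((i, j), dist(clusters[i], clusters[j]))
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))),
                key=lambda t: t[1],
            )
            if d > threshold:  # stop when no pair is similar enough to link
                break
            clusters[i] += clusters.pop(j)
        return clusters

    # Toy example: two recordings with two speakers each; the nearby factor
    # vectors link into the same cross-recording speaker identity.
    linked = link_speakers(
        [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]],
        threshold=1.0,
    )
    ```

    Stopping on a distance threshold, rather than a fixed cluster count, matters here: the number of unique speakers across a large corpus is unknown in advance.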